    Topology-Aware Surface Reconstruction for Point Clouds

    We present an approach to inform the reconstruction of a surface from a point scan through topological priors. The reconstruction is based on basis functions which are optimized to provide a good fit to the point scan while satisfying predefined topological constraints. We optimize the parameters of the model to obtain a likelihood function over the reconstruction domain. The topological constraints are captured by persistence diagrams, which are incorporated into the optimization algorithm to promote the correct topology. The result is a novel topology-aware technique which can (1) weed out topological noise from point scans, and (2) capture certain nuanced properties of the underlying shape which could otherwise be lost during surface reconstruction. We showcase results reconstructing shapes with multiple potential topologies, compare against classical surface reconstruction techniques, and show the completion of real scan data.
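    The abstract does not give implementation details, but a common way to realize a persistence-based topological penalty is to keep the desired number of most persistent features and penalize the total persistence of the rest. Below is a minimal sketch of that idea, assuming the reconstruction is a scalar likelihood field sampled on a regular grid and using the GUDHI library; the function name, target feature count, and toy field are illustrative assumptions, and a real optimizer would additionally need gradients through the persistence diagram, which this sketch omits.
```python
# Sketch: penalize topological noise in a scalar field via persistence.
import numpy as np
import gudhi

def topological_penalty(field, n_features_to_keep, dimension=0):
    """Sum the persistence of all but the `n_features_to_keep` most
    persistent features of the sublevel-set filtration of `field`."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=field)
    cc.persistence()  # compute persistence pairs for all dimensions
    intervals = cc.persistence_intervals_in_dimension(dimension)
    # Finite persistences only; the essential (infinite) bar is kept.
    pers = np.array([d - b for b, d in intervals if np.isfinite(d)])
    pers = np.sort(pers)[::-1]
    # Everything beyond the desired count is treated as topological noise.
    return float(np.sum(pers[n_features_to_keep:]))

# Example: a noisy field that should contain one connected component.
rng = np.random.default_rng(0)
field = rng.normal(size=(64, 64)) * 0.1
field[16:48, 16:48] -= 1.0  # one dominant basin
print(topological_penalty(field, n_features_to_keep=1, dimension=0))
```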

    A Survey of Surface Reconstruction from Point Clouds

    The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contains a wide variety of defects. While much of the earlier work focused on reconstructing a piecewise-smooth representation of the original shape, recent work has adopted more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations, not necessarily the explicit geometry. We survey the field of surface reconstruction and provide a categorization with respect to priors, data imperfections, and reconstruction output. By taking a holistic view of surface reconstruction, we give a detailed characterization of the field, highlight similarities between diverse reconstruction techniques, and provide directions for future work in surface reconstruction.

    VNT-Net: Rotational Invariant Vector Neuron Transformers

    Learning 3D point sets with rotational invariance is an important and challenging problem in machine learning. Rotational invariant architectures relieve 3D point cloud neural networks from requiring a canonical global pose and from exhaustive data augmentation with all possible rotations. In this work, we introduce a rotational invariant neural network by combining recently introduced vector neurons with self-attention layers to build a point cloud vector neuron transformer network (VNT-Net). Vector neurons are known for their simplicity and versatility in representing SO(3) actions and are thereby incorporated into common neural operations. Similarly, Transformer architectures have gained popularity and were recently shown successful for images by operating directly on sequences of image patches, achieving superior performance and convergence. To benefit from both worlds, we combine the two structures, mainly by showing how to adapt the multi-headed attention layers to comply with vector neuron operations. Through this adaptation, the attention layers become SO(3)-equivariant and the overall network becomes rotational invariant. Experiments demonstrate that our network efficiently handles 3D point cloud objects in arbitrary poses. We also show that our network achieves higher accuracy than related state-of-the-art methods and requires less training, owing to a smaller number of hyperparameters, on common classification and segmentation tasks.
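    The key property the abstract relies on is that a vector-neuron layer acts on channels of 3D vector features and therefore commutes with any rotation. The sketch below checks this equivariance for a minimal linear layer; the class name, shapes, and initialization are illustrative assumptions, not the paper's code.
```python
# Sketch: a vector-neuron linear layer mixes channels of 3D vector
# features, leaving the coordinate axis untouched, so it commutes
# with any rotation R in SO(3).
import torch

class VNLinear(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_channels, in_channels) * 0.1)

    def forward(self, x):
        # x: (batch, channels, 3) vector features; mix channels only.
        return torch.einsum('oc,bcd->bod', self.weight, x)

# Equivariance check: rotating the input rotates the output identically.
torch.manual_seed(0)
layer = VNLinear(8, 16)
x = torch.randn(2, 8, 3)
# Random rotation matrix: orthonormal basis with determinant fixed to +1.
q, _ = torch.linalg.qr(torch.randn(3, 3))
R = q * torch.sign(torch.det(q))
print(torch.allclose(layer(x) @ R, layer(x @ R), atol=1e-5))  # True
```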

    Non-local Scan Consolidation for 3D Urban Scenes

    Recent advances in scanning technologies, in particular devices that extract depth through active sensing, allow fast scanning of urban scenes. Such rapid acquisition incurs imperfections: large regions remain missing, significant variation in sampling density is common, and the data is often corrupted with noise and outliers. However, buildings often exhibit large-scale repetitions and self-similarities. Detecting, extracting, and utilizing such large-scale repetitions provides a powerful means to consolidate the imperfect data. Our key observation is that the same geometry, scanned multiple times over reoccurring instances, allows the application of a simple yet effective non-local filtering. The multiple observations of the geometry are fused together and projected onto a base geometry defined by clustering corresponding surfaces. Denoising is applied by separating the process into off-plane and in-plane phases. We show that consolidating the reoccurrences provides robust denoising and allows reliable completion of missing parts. We present evaluation results of the algorithm on several LiDAR scans of buildings of varying complexity and styles.
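    A minimal sketch of the off-plane/in-plane split described above, assuming the reoccurring patches have already been detected and registered into a common frame with point-wise correspondence; the function names and the simple mean filter are illustrative stand-ins for the paper's non-local filtering.
```python
# Sketch: fuse registered scans of repeated geometry, fit a base plane,
# and denoise the off-plane component by averaging across reoccurrences.
import numpy as np

def fit_base_plane(points):
    """Least-squares plane through `points` (N, 3) via SVD."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # normal = direction of least variance

def consolidate(aligned_patches):
    """Denoise K registered scans (each (N, 3)) of the same geometry."""
    fused = np.concatenate(aligned_patches, axis=0)
    centroid, normal = fit_base_plane(fused)
    # Off-plane heights per patch; averaging assumes corresponded samples.
    offsets = np.stack([(p - centroid) @ normal for p in aligned_patches])
    mean_offsets = offsets.mean(axis=0)
    denoised = []
    for patch, off in zip(aligned_patches, offsets):
        in_plane = patch - np.outer(off, normal)   # projection onto plane
        denoised.append(in_plane + np.outer(mean_offsets, normal))
    return denoised

# Example: three noisy scans of the same planar facade patch.
rng = np.random.default_rng(1)
base = rng.uniform(-1, 1, size=(200, 3)) * np.array([1.0, 1.0, 0.0])
scans = [base + rng.normal(scale=0.05, size=base.shape) for _ in range(3)]
clean = consolidate(scans)
```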